In recent years, the field of neuromorphic low-power systems, which consume orders of magnitude less power than conventional architectures, has gained significant momentum. However, their wider use is still hindered by the lack of algorithms that can harness the strengths of such architectures. While neuromorphic adaptations of representation learning algorithms are now emerging, efficient processing of temporal sequences or variable-length inputs remains difficult. Recurrent neural networks (RNNs) are widely used in machine learning to solve a variety of sequence learning tasks. In this work we present a train-and-constrain methodology that enables the mapping of machine-learned (Elman) RNNs onto a substrate of spiking neurons, while remaining compatible with the capabilities of current and near-future neuromorphic systems. This "train-and-constrain" method consists of first training RNNs using backpropagation through time, then discretizing the weights, and finally converting them to spiking RNNs by matching the responses of the artificial neurons with those of the spiking neurons. We demonstrate our approach on a natural language processing task (question classification), mapping the entire recurrent layer of the network onto IBM's Neurosynaptic System "TrueNorth", a spike-based digital neuromorphic hardware architecture. TrueNorth imposes specific constraints on connectivity and on neural and synaptic parameters. To satisfy these constraints, it was necessary to discretize the synaptic weights and neural activities to 16 levels, and to limit fan-in to 64 inputs. We find that short synaptic delays are sufficient to implement the dynamical (temporal) aspect of the RNN in the question classification task. The hardware-constrained model achieved 74% accuracy in question classification while using less than 0.025% of the cores on one TrueNorth chip, resulting in an estimated power consumption of ~17 µW.
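The "constrain" step described above hinges on quantizing trained weights to the 16 levels and 64-input fan-in that TrueNorth supports. The following Python sketch illustrates one plausible way to perform this step; the function name `discretize_weights`, the symmetric quantization grid, and the magnitude-based fan-in pruning are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def discretize_weights(W, num_levels=16, max_fan_in=64):
    """Quantize a trained weight matrix to a fixed number of levels
    and enforce a per-neuron fan-in limit.

    Illustrative sketch only: the quantization grid, rounding rule,
    and fan-in pruning strategy are assumptions, not the paper's
    exact method.
    """
    # Map weights onto a symmetric integer grid of `num_levels` steps,
    # e.g. 16 levels -> integer codes in [-8, 7].
    w_max = np.abs(W).max()
    step = w_max / (num_levels // 2)
    codes = np.clip(np.round(W / step),
                    -(num_levels // 2), num_levels // 2 - 1)
    W_q = codes * step

    # Enforce the fan-in constraint: for each post-synaptic neuron
    # (row), keep only the `max_fan_in` largest-magnitude inputs and
    # zero out (disconnect) the rest.
    for i in range(W_q.shape[0]):
        row = W_q[i]
        if np.count_nonzero(row) > max_fan_in:
            order = np.argsort(np.abs(row))   # weakest first
            row[order[:-max_fan_in]] = 0.0
    return W_q

# Example: a dense 100-unit recurrent weight matrix.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(100, 100))
W_q = discretize_weights(W)
print(np.unique(W_q).size)                     # at most 16 distinct values
print(max(np.count_nonzero(r) for r in W_q))   # fan-in <= 64 per neuron
```

In an actual train-and-constrain flow, a step like this would sit between backpropagation-through-time training and the conversion to spiking neurons, with the network typically fine-tuned or re-validated after quantization to recover accuracy lost to the constraints.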